
    Adults with dyslexia demonstrate large effects of crowding and detrimental effects of distractors in a visual tilt discrimination task

    Previous research has shown that adults with dyslexia (AwD) are disproportionately impacted by close spacing of stimuli and increased numbers of distractors in a visual search task compared to controls [1]. Using an orientation discrimination task, the present study extended these findings to show that, even in conditions where target search was not required, (i) AwD showed detrimental effects of both crowding and increased numbers of distractors; (ii) AwD had more pronounced difficulty with distractor exclusion in the left visual field; and (iii) measures of crowding and distractor exclusion correlated significantly with literacy measures. Furthermore, such difficulties were not accounted for by the presence of covarying symptoms of ADHD in the participant groups. These findings provide further evidence to suggest that the ability to exclude distracting stimuli likely contributes to the reported visual attention difficulties in AwD and to the aetiology of literacy difficulties. The pattern of results is consistent with weaker and asymmetric attention in AwD.

    Audiovisual time perception is spatially specific

    Our sensory systems face a daily barrage of auditory and visual signals whose arrival times form a wide range of audiovisual asynchronies. These temporal relationships constitute an important metric for the nervous system when surmising which signals originate from common external events. Internal consistency is known to be aided by sensory adaptation: repeated exposure to consistent asynchrony brings perceived arrival times closer to simultaneity. However, given the diverse nature of our audiovisual environment, functionally useful adaptation would need to be constrained to signals that were generated together. In the current study, we investigate the role of two potential constraining factors: spatial and contextual correspondence. By employing an experimental design that allows independent control of both factors, we show that observers are able to simultaneously adapt to two opposing temporal relationships, provided they are segregated in space. No such recalibration was observed when spatial segregation was replaced by contextual stimulus features (in this case, pitch and spatial frequency). These effects provide support for dedicated asynchrony mechanisms that interact with spatially selective mechanisms early in visual and auditory sensory pathways.

    Multisensory causal inference in the brain

    At any given moment, our brain processes multiple inputs from its different sensory modalities (vision, hearing, touch, etc.). In deciphering this array of sensory information, the brain has to solve two problems: (1) which of the inputs originate from the same object and should be integrated, and (2) for the sensations originating from the same object, how best to integrate them. Recent behavioural studies suggest that the human brain solves these problems using optimal probabilistic inference, known as Bayesian causal inference. However, how and where the underlying computations are carried out in the brain has remained unknown. By combining neuroimaging-based decoding techniques and computational modelling of behavioural data, a new study now sheds light on how multisensory causal inference maps onto specific brain areas. The results suggest that the complexity of neural computations increases along the visual hierarchy and link specific components of the causal inference process with specific visual and parietal regions.
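
    For readers unfamiliar with the framework, the core computation of Bayesian causal inference can be sketched as follows. This is the generic textbook formulation, with a prior probability of a common cause (p_common) as a free parameter; it is not the specific model fitted in the study discussed above.

        % Posterior probability that auditory and visual inputs x_A, x_V share a common cause (C = 1)
        P(C{=}1 \mid x_A, x_V) =
          \frac{P(x_A, x_V \mid C{=}1)\, p_{\mathrm{common}}}
               {P(x_A, x_V \mid C{=}1)\, p_{\mathrm{common}} + P(x_A, x_V \mid C{=}2)\,(1 - p_{\mathrm{common}})}

        % Model-averaged estimate: mix the fused (C = 1) and segregated (C = 2) estimates
        \hat{s} = P(C{=}1 \mid x_A, x_V)\, \hat{s}_{C=1} + \bigl(1 - P(C{=}1 \mid x_A, x_V)\bigr)\, \hat{s}_{C=2}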

    Nitric Oxide Enhances Desiccation Tolerance of Recalcitrant Antiaris toxicaria Seeds via Protein S-Nitrosylation and Carbonylation

    The viability of recalcitrant seeds is lost following stress from either drying or freezing. Reactive oxygen species (ROS) resulting from uncontrolled metabolic activity are likely responsible for seed sensitivity to drying. Nitric oxide (NO) and the ascorbate-glutathione cycle can be used for the detoxification of ROS, but their roles in the seed response to desiccation remain poorly understood. Here, we report that desiccation induces rapid accumulation of H2O2, which blocks recalcitrant Antiaris toxicaria seed germination; however, pretreatment with NO increases the activity of antioxidant ascorbate-glutathione pathway enzymes and metabolites, diminishes H2O2 production and assuages the inhibitory effects of desiccation on seed germination. Desiccation increases the protein carbonylation levels and reduces protein S-nitrosylation of these antioxidant enzymes; these effects can be reversed with NO treatment. Antioxidant protein S-nitrosylation levels can be further increased by the application of S-nitrosoglutathione reductase inhibitors, which further enhances NO-induced seed germination rates after desiccation and reduces desiccation-induced H2O2 accumulation. These findings suggest that NO reinforces recalcitrant seed desiccation tolerance by regulating antioxidant enzyme activities to stabilize H2O2 accumulation at an appropriate concentration. During this process, protein carbonylation and S-nitrosylation patterns are used as a specific molecular switch to control antioxidant enzyme activities.

    Grouping by feature of cross-modal flankers in temporal ventriloquism

    Signals in one sensory modality can influence perception of another, for example the bias of visual timing by audition: temporal ventriloquism. Strong accounts of temporal ventriloquism hold that the sensory representation of visual signal timing changes to that of the nearby sound. Alternatively, underlying sensory representations do not change. Rather, perceptual grouping processes based on spatial, temporal, and featural information produce best estimates of global event properties. In support of this interpretation, when feature-based perceptual grouping conflicts with temporal information-based grouping in scenarios that reveal temporal ventriloquism, the effect is abolished. However, previous demonstrations of this disruption used long-range visual apparent-motion stimuli. We investigated whether similar manipulations of feature grouping could also disrupt the classical temporal ventriloquism demonstration, which occurs over a short temporal range. We estimated the precision of participants’ reports of which of two visual bars occurred first. The bars were accompanied by different cross-modal signals that onset synchronously or asynchronously with each bar. Participants’ performance improved with asynchronous presentation relative to synchronous presentation (temporal ventriloquism); however, unlike in the long-range apparent-motion paradigm, this improvement was unaffected by different combinations of cross-modal features, suggesting that featural similarity of cross-modal signals may not modulate cross-modal temporal influences at short time scales.

    Multisensory Oddity Detection as Bayesian Inference

    A key goal for the perceptual system is to optimally combine information from all the senses that may be available in order to develop the most accurate and unified picture possible of the outside world. The contemporary theoretical framework of ideal observer maximum likelihood integration (MLI) has been highly successful in modelling how the human brain combines information from a variety of different sensory modalities. However, in various recent experiments involving multisensory stimuli of uncertain correspondence, MLI breaks down as a successful model of sensory combination. Within the paradigm of direct stimulus estimation, perceptual models which use Bayesian inference to resolve correspondence have recently been shown to generalize successfully to these cases where MLI fails. This approach has been known variously as model inference, causal inference or structure inference. In this paper, we examine causal uncertainty in another important class of multisensory perception paradigm, oddity detection, and demonstrate how a Bayesian ideal observer also treats oddity detection as a structure inference problem. We validate this approach by showing that it provides an intuitive and quantitative explanation of an important pair of multisensory oddity detection experiments, involving cues across and within modalities, for which MLI previously failed dramatically, allowing a novel unifying treatment of within- and cross-modal multisensory perception. Our successful application of structure inference models to the new ‘oddity detection’ paradigm, and the resultant unified explanation of across- and within-modality cases, provide further evidence to suggest that structure inference may be a commonly evolved principle for combining perceptual information in the brain.
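
    For context, the MLI rule that these structure-inference models generalise is the standard reliability-weighted average of the single-cue estimates. The formulation below is the textbook version, not notation taken from the paper itself; s_A and s_V denote auditory and visual estimates with variances sigma_A^2 and sigma_V^2.

        % Maximum likelihood integration of auditory and visual estimates
        \hat{s}_{\mathrm{MLI}} = w_A \hat{s}_A + w_V \hat{s}_V, \qquad
        w_A = \frac{1/\sigma_A^2}{1/\sigma_A^2 + 1/\sigma_V^2}, \quad w_V = 1 - w_A

        % The combined estimate is more reliable than either cue alone
        \sigma_{\mathrm{MLI}}^2 = \frac{\sigma_A^2\,\sigma_V^2}{\sigma_A^2 + \sigma_V^2}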

    A Comprehensive Model of Audiovisual Perception: Both Percept and Temporal Dynamics

    The sparse information captured by the sensory systems is used by the brain to apprehend the environment, for example, to spatially locate the source of audiovisual stimuli. This is an ill-posed inverse problem whose inherent uncertainty can be resolved by processing the information jointly and by introducing constraints, during this process, on the way this multisensory information is handled. This process and its result, the percept, depend on the contextual conditions in which perception takes place. To date, perception has been investigated and modeled on the basis of only one of two of its dimensions: the percept or the temporal dynamics of the process. Here, we extend our previously proposed audiovisual perception model to predict both of these dimensions and so capture the phenomenon as a whole. Starting from a behavioral analysis, we use a data-driven approach to elicit a Bayesian network which infers the different percepts and dynamics of the process. Context-specific independence analyses enable us to use the model's structure to directly explore how different contexts affect the way subjects handle the same available information. Hence, we establish that, while the percepts yielded by a unisensory stimulus or by the non-fusion of multisensory stimuli may be similar, they result from different processes, as shown by their differing temporal dynamics. Moreover, our model predicts the impact of bottom-up (stimulus-driven) factors as well as of top-down factors (induced by instruction manipulation) on both the perception process and the percept itself.

    Atypical Balance between Occipital and Fronto-Parietal Activation for Visual Shape Extraction in Dyslexia

    Reading requires the extraction of letter shapes from a complex background of text, and an impairment in visual shape extraction would cause difficulty in reading. To investigate the neural mechanisms of visual shape extraction in dyslexia, we used functional magnetic resonance imaging (fMRI) to examine brain activation while adults with or without dyslexia responded to the change of an arrow’s direction in a complex, relative to a simple, visual background. In comparison to adults with typical reading ability, adults with dyslexia exhibited opposite patterns of atypical activation: decreased activation in occipital visual areas associated with visual perception, and increased activation in frontal and parietal regions associated with visual attention. These findings indicate that dyslexia involves atypical brain organization for fundamental processes of visual shape extraction even when reading is not involved. Overengagement in higher-order association cortices, required to compensate for underengagement in lower-order visual cortices, may result in competition for top-down attentional resources helpful for fluent reading.
    Funding: Ellison Medical Foundation; Martin Richmond Memorial Fund; National Institutes of Health (U.S.) Grant UL1RR025758; National Institutes of Health (U.S.) Grant F32EY014750-01; MIT Class of 1976 (Funds for Dyslexia Research).

    Rate after-effects fail to transfer cross-modally: evidence for distributed sensory timing mechanisms

    Accurate time perception is critical for a number of human behaviours, such as understanding speech and the appreciation of music. However, it remains unresolved whether sensory time perception is mediated by a central timing component regulating all senses, or by a set of distributed mechanisms, each dedicated to a single sensory modality and operating in a largely independent manner. To address this issue, we conducted a range of unimodal and cross-modal rate adaptation experiments, in order to establish the degree of specificity of classical after-effects of sensory adaptation. Adapting to a fast rate of sensory stimulation typically makes a moderate rate appear slower (repulsive after-effect), and vice versa. A central timing hypothesis predicts general transfer of adaptation effects across modalities, whilst distributed mechanisms predict a high degree of sensory selectivity. Rate perception was quantified by a method of temporal reproduction across all combinations of visual, auditory and tactile senses. Robust repulsive after-effects were observed in all unimodal rate conditions, but were not observed for any cross-modal pairings. Our results show that sensory timing abilities are adaptable but, crucially, that this change is modality-specific, an outcome that is consistent with a distributed sensory timing hypothesis.

    Activity in perceptual classification networks as a basis for human subjective time perception

    Despite being a fundamental dimension of experience, how the human brain generates the perception of time remains unknown. Here, we provide a novel explanation for how human time perception might be accomplished, based on non-temporal perceptual classification processes. To demonstrate this proposal, we build an artificial neural system centred on a feed-forward image classification network, functionally similar to human visual processing. In this system, input videos of natural scenes drive changes in network activation, and the accumulation of salient changes in activation is used to estimate duration. Estimates produced by this system match human reports made about the same videos, replicating key qualitative biases, including the differences between scenes of walking around a busy city and scenes of sitting in a cafe or office. Our approach provides a working model of duration perception from stimulus to estimation and presents a new direction for examining the foundations of this central aspect of human experience.
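
    A minimal Python sketch of the accumulation idea described above. The threshold, scaling factor and the random-walk stand-in for network activations are illustrative placeholders, not components of the published system, which uses a pretrained image-classification network and attention-modulated thresholds.

        import numpy as np

        def estimate_duration(frame_activations, change_threshold=5.0, seconds_per_change=0.25):
            """Accumulate 'salient' changes in activation across video frames and
            map the accumulated count to a duration estimate (toy illustration)."""
            salient_changes = 0
            reference = frame_activations[0]
            for activation in frame_activations[1:]:
                # Distance between the current activation vector and the last reference
                if np.linalg.norm(activation - reference) > change_threshold:
                    salient_changes += 1
                    reference = activation  # reset the reference after each salient change
            return salient_changes * seconds_per_change

        # Example: random-walk 'activations' standing in for one layer of a classification network
        rng = np.random.default_rng(0)
        activations = np.cumsum(rng.normal(scale=0.2, size=(100, 512)), axis=0)
        print(f"Estimated duration: {estimate_duration(activations):.2f} s")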